On the Rates of Convergence from Surrogate Risk Minimizers to the Bayes Optimal Classifier

Authors

  • Jingwei Zhang
  • Tongliang Liu
  • Dacheng Tao
Abstract

We study the rates of convergence from empirical surrogate risk minimizers to the Bayes optimal classifier. Specifically, we introduce the notion of consistency intensity to characterize a surrogate loss function and exploit this notion to obtain the rate of convergence from an empirical surrogate risk minimizer to the Bayes optimal classifier, enabling fair comparisons of the excess risks of different surrogate risk minimizers. The main result of the paper has practical implications, including (1) showing that the hinge loss is superior to the logistic and exponential losses in the sense that its empirical minimizer converges faster to the Bayes optimal classifier and (2) providing guidance on modifying surrogate loss functions to accelerate this convergence.

UBTECH Sydney AI Centre and the School of Information Technologies, Faculty of Engineering and Information Technologies, The University of Sydney, NSW, 2006, Australia. [email protected], [email protected], [email protected]
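The three surrogate losses compared in the abstract have simple closed forms as functions of the margin. The following minimal sketch (an illustration of the standard definitions, not code from the paper) evaluates the hinge, logistic, and exponential losses at a margin z = y·f(x):

```python
import math

# Common surrogate losses as functions of the margin z = y * f(x),
# where y in {-1, +1} is the true label and f(x) is the classifier score.

def hinge_loss(z):
    """Hinge loss max(0, 1 - z), as used by support vector machines."""
    return max(0.0, 1.0 - z)

def logistic_loss(z):
    """Logistic loss log(1 + exp(-z)), as used by logistic regression."""
    return math.log(1.0 + math.exp(-z))

def exponential_loss(z):
    """Exponential loss exp(-z), as used by AdaBoost."""
    return math.exp(-z)

# All three are convex upper bounds on the 0-1 loss and are
# classification-calibrated, so minimizing any of them over all
# measurable functions recovers the Bayes optimal classifier;
# the paper compares how fast the empirical minimizers get there.
for z in (-1.0, 0.0, 1.0, 2.0):
    print(z, hinge_loss(z), logistic_loss(z), exponential_loss(z))
```

Note that the hinge loss is exactly zero for any margin z ≥ 1, while the logistic and exponential losses remain strictly positive everywhere; this difference in behavior near the decision boundary is the kind of property the paper's consistency intensity is designed to capture.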


Similar articles

Intelligent and Robust Genetic Algorithm Based Classifier

The concepts of robust classification and intelligent control of the search process of a genetic algorithm (GA) are introduced and integrated with a conventional genetic classifier to develop a new version of it, called the Intelligent and Robust GA-classifier (IRGA-classifier). It can efficiently approximate the decision hyperplanes in the feature space. It is shown experime...


A New Approach for Text Documents Classification with Invasive Weed Optimization and Naive Bayes Classifier

With the rapid increase in the number of documents, Text Document Classification (TDC) methods have become crucial. This paper presents a hybrid model of Invasive Weed Optimization (IWO) and a Naive Bayes (NB) classifier (IWO-NB) for Feature Selection (FS), in order to reduce the large size of the feature space in TDC. TDC includes different actions such as text processing, feature extraction, form...


Asymptotic Optimal Empirical Bayes Estimation of the Parameter of the Erlang Distribution

This paper studies the empirical Bayes estimation of the parameter of the Erlang distribution under a weighted squared error loss function. The Bayes estimator is first derived using the pivot method. An empirical Bayes estimator of the unknown parameter is then constructed for the case where the prior is unknown. The asymptotically optimal property of this empirical Bayes estimator is also discussed. It ...


Active Learning with Partially Labeled Data via Bias Reduction

With active learning, the learner participates in the process of selecting instances so as to speed up convergence to the "best" model. This paper presents a principled method of instance selection based on recent bias-variance decomposition work for the 0-1 loss function. We focus on bias reduction to reduce 0-1 loss by using an approximation to the optimal Bayes classifier to calculate the b...


On the Adaptive Properties of Decision Trees

Decision trees are surprisingly adaptive in three important respects: they automatically (1) adapt to favorable conditions near the Bayes decision boundary; (2) focus on data distributed on lower-dimensional manifolds; and (3) reject irrelevant features. In this paper we examine a decision tree based on dyadic splits that adapts to each of these conditions to achieve minimax optimal rates of conver...



Journal:
  • CoRR

Volume abs/1802.03688  Issue 

Pages  -

Publication year: 2018